Thus far in our course we have done both classification and regression analyses. For our classification, the data has been one of two types:
The primary difference in the FCN vs CNN approach is the following:
In this workbook we will deal with a different sort of dataset: ordered sequences. We will focus initially on classification of time sequences, but will extend this to classification of text sequences.
This workbook is based on examples and code from these sources:
The analysis we will do in this workbook deals with data collected from smartphones carried by human subjects engaged in normal daily activities: walking, sitting, jogging, standing, climbing upstairs, or descending downstairs. The smartphones were carried in the front pants pocket by 36 subjects. The raw data are the accelerometer readings in the x, y, and z directions (relative to the smartphone), collected at 50 Hz (50 samples per second).
Given the phone orientation when a person is standing:

A paper describing the results can be found here.
A video of the activity can be found here.
The data can be found here: http://www.cis.fordham.edu/wisdm/dataset.php
I have placed this in the project area on OSC at this location: /fs/ess/PAS2038/PHYSICS5680_OSU/data/WISDM/WISDM_ar_v1.1/WISDM_ar_v1.1_raw.txt
There is a text file in the same location that has more descriptive information about the data.
The columns in the above files are the following:
The acceleration in the x direction as measured by the Android phone's accelerometer. A value of 10 = 1g = 9.81 m/s^2, and 0 = no acceleration. The acceleration recorded includes gravitational acceleration toward the center of the Earth, so that when the phone is at rest on a flat surface the vertical axis will register +-10.
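The unit convention above can be sketched in code. This is a minimal illustration (the helper names `raw_to_g`/`raw_to_ms2` are ours, not part of the dataset); the scale factor of 10 raw units per g comes directly from the description above:

```python
# Convert a raw WISDM accelerometer reading to g, and to m/s^2.
# The dataset stores values scaled so that 10.0 corresponds to 1 g (9.81 m/s^2).
G_MS2 = 9.81          # standard gravity in m/s^2
RAW_PER_G = 10.0      # raw units per g, per the dataset description

def raw_to_g(raw):
    """Raw accelerometer units -> multiples of g."""
    return raw / RAW_PER_G

def raw_to_ms2(raw):
    """Raw accelerometer units -> m/s^2."""
    return raw_to_g(raw) * G_MS2

# A phone at rest on a flat surface reads about +-10 on the vertical axis:
print(raw_to_g(10.0))     # 1.0  (i.e., 1 g)
print(raw_to_ms2(10.0))   # 9.81
```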
Let's read the data in so we can explore it.
import pandas as pd
import numpy as np
#
# Use this to convert text to floating point
def convert_to_float(x):
    try:
        return float(x)   # np.float is deprecated in newer numpy; plain float is equivalent here
    except (TypeError, ValueError):
        return np.nan
column_names = ['user-id',
                'activity',
                'timestamp',
                'x-axis',
                'y-axis',
                'z-axis']
df = pd.read_csv('/fs/ess/PAS2038/PHYSICS5680_OSU/data/WISDM/WISDM_ar_v1.1/WISDM_ar_v1.1_raw.txt',
                 header=None,
                 names=column_names)
# Last column has a ";" character which must be removed ...
df['z-axis'].replace(regex=True,
                     inplace=True,
                     to_replace=r';',
                     value=r'')
# ... and then this column must be transformed to float explicitly
df['z-axis'] = df['z-axis'].apply(convert_to_float)
# This is very important otherwise the model will not fit and loss
# will show up as NAN
#
# Get rid of rows with missing data
df.dropna(axis=0, how='any', inplace=True)
print(df.head())
   user-id activity       timestamp    x-axis     y-axis    z-axis
0       33  Jogging  49105962326000 -0.694638  12.680544  0.503953
1       33  Jogging  49106062271000  5.012288  11.264028  0.953424
2       33  Jogging  49106112167000  4.903325  10.882658 -0.081722
3       33  Jogging  49106222305000 -0.612916  18.496431  3.023717
4       33  Jogging  49106332290000 -1.184970  12.108489  7.205164
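As an aside, the per-element `convert_to_float` above can be replaced by pandas' vectorized `pd.to_numeric` with `errors='coerce'`, which also maps unparseable strings to NaN. A minimal sketch on toy data (the Series values here are made up to mimic the z-axis column, including one trailing `;` and one bad token):

```python
import numpy as np
import pandas as pd

# Toy stand-in for the z-axis column: one value carries the trailing ';',
# one value is not a number at all
s = pd.Series(['0.50', '9.81;', 'oops'])

# Strip the stray ';' and coerce anything unparseable to NaN in one pass
cleaned = pd.to_numeric(s.str.replace(';', '', regex=False), errors='coerce')
print(cleaned.tolist())   # [0.5, 9.81, nan]
```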
Let's make some simple plots which show how many samples are in the data for:
from matplotlib import pyplot as plt
%matplotlib inline
# Show how many training examples exist for each of the six activities
df['activity'].value_counts().plot(kind='bar',
                                   title='Training Examples by Activity Type')
plt.show()
# Better understand how the recordings are spread across the different
# users who participated in the study
#print(df['user-id'].value_counts())
df['user-id'].value_counts().plot(kind='bar',
                                  title='Training Examples by User')
plt.show()
We need to split the data into test and train data. The way we will do this is to see how much time each user spends in each activity. We will use defaultdicts to keep track of this.
Also, since each "step" in time is 1/50.0 of a second, we divide by 50 to convert steps to seconds (so activityStepsByUser is actual activity by user in seconds).
Be patient! This next code block takes a minute or so. There are 1.1M lines in the data file!
#
# Used to implement the multi-dimensional counter we need in the performance class
from collections import defaultdict
from functools import partial
from itertools import repeat
def nested_defaultdict(default_factory, depth=1):
    result = partial(defaultdict, default_factory)
    for _ in repeat(None, depth - 1):
        result = partial(defaultdict, result)
    return result()
#
activitySteps = defaultdict(list)
activityStepsByUser = defaultdict(partial(defaultdict, float))
activitiesByUser = defaultdict(int)
oldActivity = ''
oldUser = -1
startTime = -1
num = 0
steps = 0
activities = set()
#
# Loop over each row in our dataset
for index, row in df.iterrows():
    #
    # Data from the current row
    activity = row['activity']
    activities.add(activity)
    user = row['user-id']
    currentTime = row['timestamp']
    steps += 1
    #print("user,activity,ctime",user,activity,steps)
    #
    # Is the activity of this row different from our last row? How about the user?
    # If either of these change, collect data
    if activity != oldActivity or user != oldUser:
        #
        # If oldUser is less than zero then we have not started collecting data yet!
        if oldUser >= 0:
            #
            # Something changed, so store the old data
            activitySteps[oldActivity].append(steps/50.0)
            activityStepsByUser[oldUser][oldActivity] += steps/50.0
            activitiesByUser[oldUser] += 1
        #
        # Reset the variables to the current activity/user (oldActivity must be
        # reset here too, otherwise the first segment is recorded under an empty name)
        oldActivity = activity
        oldUser = user
        startTime = currentTime
        steps = 0
    num += 1
    if num%50000 == 0:
        print("... processed lines:",num)
... processed lines: 50000
... processed lines: 100000
... processed lines: 150000
... processed lines: 200000
... processed lines: 250000
... processed lines: 300000
... processed lines: 350000
... processed lines: 400000
... processed lines: 450000
... processed lines: 500000
... processed lines: 550000
... processed lines: 600000
... processed lines: 650000
... processed lines: 700000
... processed lines: 750000
... processed lines: 800000
... processed lines: 850000
... processed lines: 900000
... processed lines: 950000
... processed lines: 1000000
... processed lines: 1050000
Now we can print out how much time each user spends in each activity. We see that some of the participants don't spend any time in some of the activities. We also note that if we order the participants by user-id, approximately every 4 users corresponds to about 10% of the data.
Also note that the amount of time each user spends in each activity is typically about 50-60 s.
import plotly.express as px
import plotly.io as pio
pio.renderers.default='notebook'
#
# How much time total for all users and all activities?
total_time = 0.0
for user in range(37):
    for activity in activities:
        total_time += activityStepsByUser[user][activity]
print('User Num Jog Walk Stand Sit Upstairs DownSt TotalTime TimePerAct Frac')
summedFrac = 0.0
segmentTimes = []
for user in range(37):
    print(user,'\t',activitiesByUser[user],'\t',end='')
    total_user = 0.0
    for activity in activities:
        total_user += activityStepsByUser[user][activity]
        if activityStepsByUser[user][activity] > 0.0:
        # if activityStepsByUser[user][activity] > 0.0 and activityStepsByUser[user][activity] < 100:
            segmentTimes.append(activityStepsByUser[user][activity])
        print(round(activityStepsByUser[user][activity],0),'\t',end='')
    summedFrac += total_user/total_time
    timePerActivity = 0.0
    if activitiesByUser[user] > 0:
        timePerActivity = total_user / float(activitiesByUser[user])
    print(round(total_user,0),' ',round(timePerActivity,0),' ',round(summedFrac,2))
fig = px.histogram(segmentTimes, nbins=50, histnorm='probability')
fig.show()
User Num Jog Walk Stand Sit Upstairs DownSt TotalTime TimePerAct Frac
0 0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
1 12 62.0 0.0 257.0 221.0 59.0 0.0 600.0 50.0 0.03
2 4 0.0 0.0 235.0 236.0 0.0 0.0 470.0 118.0 0.05
3 14 68.0 56.0 259.0 220.0 67.0 32.0 703.0 50.0 0.08
4 8 28.0 0.0 122.0 18.0 35.0 25.0 227.0 28.0 0.09
5 10 68.0 30.0 245.0 128.0 66.0 33.0 570.0 57.0 0.12
6 19 33.0 14.0 248.0 236.0 29.0 34.0 594.0 31.0 0.14
7 14 72.0 47.0 221.0 184.0 45.0 51.0 619.0 44.0 0.17
8 13 89.0 65.0 342.0 206.0 67.0 54.0 824.0 63.0 0.21
9 2 0.0 0.0 258.0 0.0 0.0 0.0 258.0 129.0 0.22
10 13 86.0 33.0 261.0 242.0 76.0 0.0 698.0 54.0 0.25
11 8 88.0 0.0 243.0 249.0 53.0 0.0 633.0 79.0 0.28
12 14 53.0 33.0 216.0 247.0 57.0 46.0 653.0 47.0 0.31
13 12 93.0 33.0 261.0 247.0 85.0 24.0 742.0 62.0 0.35
14 8 164.0 0.0 277.0 266.0 58.0 0.0 764.0 95.0 0.38
15 24 41.0 0.0 231.0 256.0 35.0 0.0 563.0 23.0 0.41
16 5 28.0 40.0 250.0 0.0 32.0 60.0 409.0 82.0 0.43
17 10 114.0 0.0 194.0 58.0 75.0 0.0 440.0 44.0 0.45
18 10 48.0 39.0 251.0 240.0 48.0 29.0 656.0 66.0 0.48
19 15 86.0 43.0 352.0 324.0 52.0 0.0 857.0 57.0 0.51
20 14 97.0 108.0 263.0 259.0 93.0 313.0 1133.0 81.0 0.57
21 14 97.0 57.0 250.0 192.0 81.0 32.0 709.0 51.0 0.6
22 8 109.0 0.0 141.0 124.0 73.0 0.0 446.0 56.0 0.62
23 7 97.0 0.0 132.0 246.0 39.0 0.0 513.0 73.0 0.64
24 14 61.0 11.0 125.0 246.0 59.0 14.0 515.0 37.0 0.67
25 2 0.0 0.0 140.0 130.0 0.0 0.0 269.0 135.0 0.68
26 12 72.0 0.0 264.0 238.0 77.0 0.0 652.0 54.0 0.71
27 14 65.0 33.0 250.0 241.0 69.0 42.0 699.0 50.0 0.74
28 6 58.0 26.0 283.0 0.0 60.0 0.0 427.0 71.0 0.76
29 15 96.0 32.0 248.0 256.0 87.0 46.0 765.0 51.0 0.79
30 9 85.0 62.0 252.0 0.0 77.0 31.0 507.0 56.0 0.82
31 12 94.0 52.0 338.0 282.0 78.0 43.0 886.0 74.0 0.86
32 12 76.0 33.0 248.0 245.0 47.0 61.0 710.0 59.0 0.89
33 18 44.0 32.0 298.0 59.0 91.0 65.0 589.0 33.0 0.92
34 14 78.0 27.0 268.0 257.0 57.0 32.0 719.0 51.0 0.95
35 6 0.0 21.0 143.0 251.0 0.0 32.0 448.0 75.0 0.97
36 12 109.0 38.0 124.0 241.0 83.0 50.0 645.0 54.0 1.0
We might want to keep all of a given user's data in either the test or train sample - this way we use data from different people to predict behavior of new people.
If we want a 80%/20% split, it looks like we can define:
We also need to convert the 'activity' column from text ('Jogging', etc) to a number (1-6) so we can one-hot encode it later.
from sklearn import preprocessing
# Define column name of the label vector
LABEL = 'ActivityEncoded'
# Transform the labels from String to Integer via LabelEncoder
le = preprocessing.LabelEncoder()
# Add a new column to the existing DataFrame with the encoded values
df[LABEL] = le.fit_transform(df['activity'].values.ravel())
print(df.head(5))
# Differentiate between test set and training set
df_test = df[df['user-id'] > 28]
df_train = df[df['user-id'] <= 28]
   user-id activity       timestamp    x-axis     y-axis    z-axis  \
0       33  Jogging  49105962326000 -0.694638  12.680544  0.503953
1       33  Jogging  49106062271000  5.012288  11.264028  0.953424
2       33  Jogging  49106112167000  4.903325  10.882658 -0.081722
3       33  Jogging  49106222305000 -0.612916  18.496431  3.023717
4       33  Jogging  49106332290000 -1.184970  12.108489  7.205164

   ActivityEncoded
0                1
1                1
2                1
3                1
4                1
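The user-based split above can be checked on toy data (the user-ids below are made up for illustration): with a boolean mask on `user-id`, every row of a given user lands entirely on one side, so no person's data leaks across the split:

```python
import pandas as pd

# Toy frame with three hypothetical users
df_toy = pd.DataFrame({'user-id': [10, 10, 29, 29, 33, 33],
                       'x-axis':  [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]})

# Same boolean-mask split as above: users <= 28 train, > 28 test
df_te = df_toy[df_toy['user-id'] > 28]
df_tr = df_toy[df_toy['user-id'] <= 28]

# No user appears on both sides of the split
overlap = set(df_tr['user-id']) & set(df_te['user-id'])
print(len(df_tr), len(df_te), overlap)   # 2 4 set()
```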
As usual, we want to normalize our features. We will use training set maximums to do this.
# Normalize features for training data set (values between 0 and 1)
# Suppress warnings for the next few operations
pd.options.mode.chained_assignment = None # default='warn'
max_x = df_train['x-axis'].max()
max_y = df_train['y-axis'].max()
max_z = df_train['z-axis'].max()
print("max values ", max_x,max_y,max_z)
df_train['x-axis'] = df_train['x-axis'] / max_x
df_train['y-axis'] = df_train['y-axis'] / max_y
df_train['z-axis'] = df_train['z-axis'] / max_z
# Round numbers
df_train = df_train.round({'x-axis': 4, 'y-axis': 4, 'z-axis': 4})
df_test['x-axis'] = df_test['x-axis'] / max_x
df_test['y-axis'] = df_test['y-axis'] / max_y
df_test['z-axis'] = df_test['z-axis'] / max_z
# Round numbers
df_test = df_test.round({'x-axis': 4, 'y-axis': 4, 'z-axis': 4})
max values 19.95 20.04 19.61
Let's make some plots to see what the data looks like. We will select out various activities from our training dataframe, and see if they make sense:
import pandas as pd
import numpy as np
import plotly.express as px
import plotly.io as pio
pio.renderers.default='notebook'
df_standing = df_train[df_train['activity'] == 'Standing'][:100]
fig = px.line(df_standing,x='timestamp',y=["x-axis","y-axis","z-axis"],hover_data=["user-id"],title='Standing')
fig.show()
df_user = df_train[df_train['user-id'] == 15][180:230]
fig = px.line(df_user,x='timestamp',y=["x-axis","y-axis","z-axis"],title='USER 15')
fig.show()
Make plots for:
df_walking = df_train[df_train['activity'] == 'Walking'][:100]
fig = px.line(df_walking,x='timestamp',y=["x-axis","y-axis","z-axis"],hover_data=["user-id"],title='Walking')
fig.show()
df_jogging = df_train[df_train['activity'] == 'Jogging'][:100]
fig = px.line(df_jogging,x='timestamp',y=["x-axis","y-axis","z-axis"],hover_data=["user-id"],title='Jogging')
fig.show()
df_upstairs = df_train[df_train['activity'] == 'Upstairs'][:100]
fig = px.line(df_upstairs,x='timestamp',y=["x-axis","y-axis","z-axis"],hover_data=["user-id"],title='Upstairs')
fig.show()
df_downstairs = df_train[df_train['activity'] == 'Downstairs'][:100]
fig = px.line(df_downstairs,x='timestamp',y=["x-axis","y-axis","z-axis"],hover_data=["user-id"],title='Downstairs')
fig.show()
df_sitting = df_train[df_train['activity'] == 'Sitting'][:100]
fig = px.line(df_sitting,x='timestamp',y=["x-axis","y-axis","z-axis"],hover_data=["user-id"],title='Sitting')
fig.show()
df_user = df_train[df_train['user-id'] == 24][180:230]
fig = px.line(df_user,x='timestamp',y=["x-axis","y-axis","z-axis"],title='USER 24')
fig.show()
From our plots and tables above, we see that the typical time each user spends in a given activity ranges from tens of seconds to a few hundred seconds. The smallest non-zero time is about 11 seconds; the largest is just under 350 seconds.
We will define our samples to be 1.6 seconds long, and we will assign as the label of each sample the activity that occurs most often in that sample. To help increase the total number of samples, we can also allow some overlap between samples. We will not allow any overlap in our test data, however.
from scipy import stats
# Same labels will be reused throughout the program
LABELS = ['Downstairs',
          'Jogging',
          'Sitting',
          'Standing',
          'Upstairs',
          'Walking']
# The number of steps within one time segment
TIME_PERIODS = 80 # since there are 50 measurements/sec, this is
# (80 measurement / (50 meas/sec)) = 1.6 seconds of data
# The steps to take from one segment to the next; if this value is equal to
# TIME_PERIODS, then there is no overlap between the segments
STEP_DISTANCE_TRAIN = 40 # for training we overlap to get more data
STEP_DISTANCE_TEST = 80 # for testing we do not overlap
def create_segments_and_labels(df, time_steps, step, label_name):
    # x, y, z acceleration as features
    N_FEATURES = 3
    # step is the number of steps to advance in each iteration; if it is equal
    # to time_steps there is no overlap between segments
    segments = []
    labels = []
    for i in range(0, len(df) - time_steps, step):
        xs = df['x-axis'].values[i: i + time_steps]
        ys = df['y-axis'].values[i: i + time_steps]
        zs = df['z-axis'].values[i: i + time_steps]
        # Retrieve the most frequent label in this segment
        label = stats.mode(df[label_name][i: i + time_steps])[0][0]
        segments.append([xs, ys, zs])
        labels.append(label)
    # Bring the segments into a better shape
    reshaped_segments = np.asarray(segments, dtype=np.float32).reshape(-1, time_steps, N_FEATURES)
    labels = np.asarray(labels)
    return reshaped_segments, labels
x_train, y_train = create_segments_and_labels(df_train,
                                              TIME_PERIODS,
                                              STEP_DISTANCE_TRAIN,
                                              LABEL)
x_test, y_test = create_segments_and_labels(df_test,
                                            TIME_PERIODS,
                                            STEP_DISTANCE_TEST,
                                            LABEL)
print('x_train shape: ', x_train.shape)
print(x_train.shape[0], 'training samples')
print('y_train shape: ', y_train.shape)
print('x_test shape: ', x_test.shape)
print(x_test.shape[0], 'testing samples')
print('y_test shape: ', y_test.shape)
x_train shape:  (20868, 80, 3)
20868 training samples
y_train shape:  (20868,)
x_test shape:  (3292, 80, 3)
3292 testing samples
y_test shape:  (3292,)
Let's begin with something reasonably simple. We have labeled samples, each 80 time steps long, with 3 channels of information at each time step. Let's treat this as $80\times3=240$ total features. We can easily write down a multi-layer network with a softmax output to classify this.
Our network will look like the figure below:
“Deep Neural Network Example” by Nils Ackermann is licensed under Creative Commons CC BY-ND 4.0
import tensorflow as tf
from tensorflow import keras
print(tf.__version__)
print(keras.__version__)
2.6.0
2.6.0
# Set input & output dimensions
num_time_periods, num_sensors = x_train.shape[1], x_train.shape[2]
num_classes = le.classes_.size
print(list(le.classes_))
# Reshape the input data
input_shape = (num_time_periods*num_sensors)
#x_train = x_train.reshape(x_train.shape[0], input_shape)
#x_test = x_test.reshape(x_test.shape[0], input_shape)
print('x_train shape:', x_train.shape)
print('input_shape:', input_shape)
# One-hot encode the output labels
x_train = x_train.astype('float32')
y_train = y_train.astype('float32')
y_train_hot = keras.utils.to_categorical(y_train, num_classes)
print('New y_train shape: ', y_train_hot.shape)
x_test = x_test.astype('float32')
y_test = y_test.astype('float32')
y_test_hot = keras.utils.to_categorical(y_test, num_classes)
print('New y_test shape: ', y_test_hot.shape)
['Downstairs', 'Jogging', 'Sitting', 'Standing', 'Upstairs', 'Walking']
x_train shape: (20868, 80, 3)
input_shape: 240
New y_train shape:  (20868, 6)
New y_test shape:  (3292, 6)
model_m = keras.models.Sequential()
model_m.add(keras.layers.Dense(100, activation='relu', input_shape=(80,3)))
model_m.add(keras.layers.Dense(100, activation='relu'))
model_m.add(keras.layers.Dense(100, activation='relu'))
model_m.add(keras.layers.Flatten())
model_m.add(keras.layers.Dense(6, activation='softmax'))
model_m.compile(loss='categorical_crossentropy',
                optimizer='adam', metrics=['accuracy'])
print(model_m.summary())
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense (Dense)                (None, 80, 100)           400
_________________________________________________________________
dense_1 (Dense)              (None, 80, 100)           10100
_________________________________________________________________
dense_2 (Dense)              (None, 80, 100)           10100
_________________________________________________________________
flatten (Flatten)            (None, 8000)              0
_________________________________________________________________
dense_3 (Dense)              (None, 6)                 48006
=================================================================
Total params: 68,606
Trainable params: 68,606
Non-trainable params: 0
_________________________________________________________________
None
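The parameter counts in the summary can be verified by hand: a Dense layer has inputs×units weights plus one bias per unit. The first Dense(100) acts on the last axis of the (80, 3) input, so it sees 3 inputs; after Flatten there are 80×100 = 8000 features feeding the softmax layer:

```python
# Dense layer parameters = inputs*units + units (bias)
dense   = 3 * 100 + 100        # first Dense, applied along the 3-channel axis
dense_1 = 100 * 100 + 100      # second and third Dense layers (100 -> 100)
dense_3 = 8000 * 6 + 6         # softmax layer after flattening 80*100 = 8000 features
total = dense + 2 * dense_1 + dense_3
print(dense, dense_1, dense_3, total)   # 400 10100 48006 68606
```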
Here we use the callbacks option to monitor the validation loss, waiting patience=4 epochs before deciding to stop training. We also save the best model.
patience = 4
callbacks_list = [
    keras.callbacks.ModelCheckpoint(
        filepath='best_model.har_fcn.h5',
        monitor='val_loss', save_best_only=True),
    keras.callbacks.EarlyStopping(monitor='val_loss', patience=patience)
]
# Hyper-parameters
BATCH_SIZE = 400
EPOCHS = 50
#
# Why do we do this?
history = model_m.fit(x_train,
                      y_train_hot,
                      batch_size=BATCH_SIZE,
                      epochs=5,
                      callbacks=[],
                      validation_data=(x_test, y_test_hot),
                      verbose=1)
#
# And then this??
# Enable validation to use ModelCheckpoint and EarlyStopping callbacks.
history = model_m.fit(x_train,
                      y_train_hot,
                      batch_size=BATCH_SIZE,
                      epochs=EPOCHS,
                      callbacks=callbacks_list,
                      validation_data=(x_test, y_test_hot),
                      verbose=1)
best_val_acc = history.history['val_accuracy'][-(patience+1)]
print("Best validation accuracy is:",best_val_acc)
Epoch 1/5
53/53 [==============================] - 7s 114ms/step - loss: 1.0047 - accuracy: 0.6682 - val_loss: 0.8385 - val_accuracy: 0.6947
Epoch 2/5
53/53 [==============================] - 6s 110ms/step - loss: 0.5732 - accuracy: 0.7975 - val_loss: 0.7674 - val_accuracy: 0.7278
Epoch 3/5
53/53 [==============================] - 6s 104ms/step - loss: 0.5252 - accuracy: 0.8140 - val_loss: 0.8514 - val_accuracy: 0.7184
Epoch 4/5
53/53 [==============================] - 6s 108ms/step - loss: 0.5000 - accuracy: 0.8201 - val_loss: 0.8796 - val_accuracy: 0.7284
Epoch 5/5
53/53 [==============================] - 6s 119ms/step - loss: 0.4781 - accuracy: 0.8290 - val_loss: 0.9286 - val_accuracy: 0.7342
Epoch 1/50
53/53 [==============================] - 7s 125ms/step - loss: 0.4641 - accuracy: 0.8319 - val_loss: 0.9557 - val_accuracy: 0.7412
Epoch 2/50
53/53 [==============================] - 6s 117ms/step - loss: 0.4454 - accuracy: 0.8385 - val_loss: 0.9422 - val_accuracy: 0.7564
Epoch 3/50
53/53 [==============================] - 6s 108ms/step - loss: 0.4306 - accuracy: 0.8422 - val_loss: 1.0157 - val_accuracy: 0.7442
Epoch 4/50
53/53 [==============================] - 6s 121ms/step - loss: 0.4152 - accuracy: 0.8475 - val_loss: 1.0017 - val_accuracy: 0.7485
Epoch 5/50
53/53 [==============================] - 6s 117ms/step - loss: 0.3983 - accuracy: 0.8531 - val_loss: 1.1275 - val_accuracy: 0.7208
Epoch 6/50
53/53 [==============================] - 6s 112ms/step - loss: 0.3904 - accuracy: 0.8552 - val_loss: 1.1780 - val_accuracy: 0.7372
Best validation accuracy is: 0.7563791275024414
print(history.history)
best_val_acc = history.history['val_accuracy'][-(patience+1)]
print("Best validation accuracy is:",best_val_acc)
{'loss': [0.4640944004058838, 0.4453620910644531, 0.4306415617465973, 0.41524192690849304, 0.3982602655887604, 0.3903670012950897], 'accuracy': [0.8319436311721802, 0.8384608030319214, 0.8422464728355408, 0.847517728805542, 0.8530764579772949, 0.8551849722862244], 'val_loss': [0.9556666612625122, 0.9422378540039062, 1.015659213066101, 1.0016745328903198, 1.1274818181991577, 1.1780214309692383], 'val_accuracy': [0.7411907911300659, 0.7563791275024414, 0.7442284226417542, 0.7484811544418335, 0.7208383679389954, 0.737241804599762]}
Best validation accuracy is: 0.7563791275024414
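The `[-(patience+1)]` indexing works because EarlyStopping halts exactly `patience` epochs after the epoch with the best `val_loss`, so the best epoch sits `patience+1` entries from the end of the history list. A quick check using the (rounded) `val_accuracy` values printed above:

```python
patience = 4
# Rounded val_accuracy values from the second fit above; training stopped
# 4 epochs after epoch 2, which had the smallest val_loss (0.9422)
val_accuracy = [0.7412, 0.7564, 0.7442, 0.7485, 0.7208, 0.7372]
best_val_acc = val_accuracy[-(patience + 1)]
print(best_val_acc)   # 0.7564
```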
Can you figure out why we called model_m.fit twice in the code above? Look at the difference in the two calls, as well as the printout, to see if you can come up with a reason why you might want to do that!
The performance we obtained for a single train/test split was about 76% (evaluated where the validation loss was at a minimum). Can we do better?
We recall that in our study of images, we found that 2D convolution could help - this was because there were features present in the data that were somewhat translationally and rotationally invariant. The convolution process also allows us to discover features, rather than imposing the features from the outside. We can also use the convolution process with sequences. In this case however, we will employ 1D convolution.
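Before building the network, it helps to see what a 1D "valid" convolution does to the sequence length: a kernel of size k slides over L time steps and produces L - k + 1 outputs, each mixing all input channels. A minimal numpy sketch (random toy data and filter values, not the learned ones):

```python
import numpy as np

L, C, K = 80, 3, 10                 # time steps, channels, kernel size
x = np.random.rand(L, C)            # one sample: 80 steps x 3 channels
w = np.random.rand(K, C)            # one toy 1D convolution kernel

# "Valid" 1D convolution: one dot product per window position
out = np.array([np.sum(x[i:i+K] * w) for i in range(L - K + 1)])
print(out.shape)                    # (71,)
```

This matches the output length of a Conv1D layer with kernel size 10 on an 80-step input; with 100 such kernels, Keras reports an output shape of (None, 71, 100).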
First let's define the network using Keras, then we will step through the layers to see if we can understand each of the steps.
#
# Define a sequential model as usual
model_m = keras.models.Sequential()
#
# Our first layer gets the input from our samples - this is 80 time steps by 3 channels
model_m.add(keras.layers.Conv1D(100, 10, activation='relu', input_shape=(80,3)))
#
# Another convolutional layer
model_m.add(keras.layers.Conv1D(100, 10, activation='relu'))
#
# Max pooling
model_m.add(keras.layers.MaxPooling1D(3))
#
# Two more convolutional layers
model_m.add(keras.layers.Conv1D(80, 10, activation='relu'))
model_m.add(keras.layers.Conv1D(80, 10, activation='relu'))
#
# Global average pooling - use this instead of "Flatten"; it helps reduce overfitting
model_m.add(keras.layers.GlobalAveragePooling1D())
#model_m.add(Flatten())
model_m.add(keras.layers.Dropout(0.5))
model_m.add(keras.layers.Dense(num_classes, activation='softmax'))
print(model_m.summary())
Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv1d (Conv1D)              (None, 71, 100)           3100
_________________________________________________________________
conv1d_1 (Conv1D)            (None, 62, 100)           100100
_________________________________________________________________
max_pooling1d (MaxPooling1D) (None, 20, 100)           0
_________________________________________________________________
conv1d_2 (Conv1D)            (None, 11, 80)            80080
_________________________________________________________________
conv1d_3 (Conv1D)            (None, 2, 80)             64080
_________________________________________________________________
global_average_pooling1d (Gl (None, 80)                0
_________________________________________________________________
dropout (Dropout)            (None, 80)                0
_________________________________________________________________
dense_4 (Dense)              (None, 6)                 486
=================================================================
Total params: 247,846
Trainable params: 247,846
Non-trainable params: 0
_________________________________________________________________
None
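As with the FCN, the parameter counts follow from the layer shapes: a Conv1D layer has kernel_size × in_channels × filters weights plus one bias per filter:

```python
# Conv1D parameters = kernel_size * in_channels * filters + filters (bias)
conv1d   = 10 * 3 * 100 + 100      # first Conv1D: kernel 10, 3 input channels, 100 filters
conv1d_1 = 10 * 100 * 100 + 100    # second Conv1D: 100 input channels, 100 filters
conv1d_2 = 10 * 100 * 80 + 80      # third Conv1D: 100 input channels, 80 filters
conv1d_3 = 10 * 80 * 80 + 80       # fourth Conv1D: 80 input channels, 80 filters
dense_4  = 80 * 6 + 6              # softmax layer on the 80 globally-pooled features
total = conv1d + conv1d_1 + conv1d_2 + conv1d_3 + dense_4
print(total)   # 247846
```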
To understand how this works, refer to the figure below. In most respects this sort of network is very similar to the 2D convolutional networks we used for images, though there are some subtle differences.
We run the fitter in exactly the same way as we did the FCN above.
patience = 4
callbacks_list = [
    keras.callbacks.ModelCheckpoint(
        filepath='best_model.har_cnn.h5',
        monitor='val_loss', save_best_only=True),
    keras.callbacks.EarlyStopping(monitor='val_loss', patience=patience)
]
model_m.compile(loss='categorical_crossentropy',
                optimizer='adam', metrics=['accuracy'])
BATCH_SIZE = 400
EPOCHS = 50
#
# Why do we do this?
history = model_m.fit(x_train,
                      y_train_hot,
                      batch_size=BATCH_SIZE,
                      epochs=5,
                      callbacks=[],
                      validation_data=(x_test, y_test_hot),
                      verbose=1)
#
# And then this??
history = model_m.fit(x_train,
                      y_train_hot,
                      batch_size=BATCH_SIZE,
                      epochs=EPOCHS,
                      callbacks=callbacks_list,
                      validation_data=(x_test, y_test_hot),
                      verbose=1)
Epoch 1/5
53/53 [==============================] - 18s 331ms/step - loss: 0.9391 - accuracy: 0.6668 - val_loss: 0.7075 - val_accuracy: 0.7467
Epoch 2/5
53/53 [==============================] - 18s 337ms/step - loss: 0.5238 - accuracy: 0.8157 - val_loss: 0.5530 - val_accuracy: 0.7950
Epoch 3/5
53/53 [==============================] - 17s 331ms/step - loss: 0.3964 - accuracy: 0.8566 - val_loss: 0.5154 - val_accuracy: 0.7925
Epoch 4/5
53/53 [==============================] - 17s 323ms/step - loss: 0.3410 - accuracy: 0.8745 - val_loss: 0.5440 - val_accuracy: 0.8208
Epoch 5/5
53/53 [==============================] - 18s 335ms/step - loss: 0.3011 - accuracy: 0.8872 - val_loss: 0.4981 - val_accuracy: 0.8226
Epoch 1/50
53/53 [==============================] - 17s 329ms/step - loss: 0.2642 - accuracy: 0.9032 - val_loss: 0.4988 - val_accuracy: 0.8317
Epoch 2/50
53/53 [==============================] - 17s 325ms/step - loss: 0.2410 - accuracy: 0.9159 - val_loss: 0.4277 - val_accuracy: 0.8609
Epoch 3/50
53/53 [==============================] - 17s 329ms/step - loss: 0.2135 - accuracy: 0.9282 - val_loss: 0.4924 - val_accuracy: 0.8269
Epoch 4/50
53/53 [==============================] - 18s 335ms/step - loss: 0.1943 - accuracy: 0.9378 - val_loss: 0.4855 - val_accuracy: 0.8548
Epoch 5/50
53/53 [==============================] - 17s 319ms/step - loss: 0.1825 - accuracy: 0.9390 - val_loss: 0.4486 - val_accuracy: 0.8587
Epoch 6/50
53/53 [==============================] - 17s 320ms/step - loss: 0.1601 - accuracy: 0.9487 - val_loss: 0.4977 - val_accuracy: 0.8597
best_val_acc = history.history['val_accuracy'][-(patience+1)]
print("Best validation accuracy is:",best_val_acc)
Best validation accuracy is: 0.8608748316764832
We see from above that the 1D convolutional neural network greatly outperforms the standard FCN. Awesome! Notice, though, that a key parameter in the network is the kernel size. We can see from our plots above comparing jogging/walking/sitting/climbing stairs that the time structure of the various activities differs. This means that a larger kernel size - one that covers more time steps - might make more sense for some activities, while a smaller kernel size might be more appropriate for others. Can we combine these in a single network? Yes! Enter the multi-headed network!
The basic idea is simple:
To implement this network, it is necessary to use the Keras Functional API, which we have already introduced. Before we do the multi-headed network, let's do a copy of the above CNN network using the functional API:
#
# Our first layer gets the input from our samples - this is 80 time steps by 3 channels
#model_m.add(Conv1D(50, 10, activation='relu', input_shape=(80,3)))
inputs1 = keras.layers.Input(shape=(80,3))
conv1 = keras.layers.Conv1D(50, 10, activation='relu')(inputs1)
#
# Another convolutional layer
#model_m.add(Conv1D(50, 10, activation='relu'))
conv2 = keras.layers.Conv1D(50, 10, activation='relu')(conv1)
#
# Max pooling
#model_m.add(MaxPooling1D(3))
pool1 = keras.layers.MaxPooling1D(3)(conv2)
#
# Two more convolutional layers
#model_m.add(Conv1D(80, 10, activation='relu'))
#model_m.add(Conv1D(80, 10, activation='relu'))
conv3 = keras.layers.Conv1D(80, 10, activation='relu')(pool1)
conv4 = keras.layers.Conv1D(80, 10, activation='relu')(conv3)
#
# Global average pooling - use this instead of "Flatten"; it helps reduce overfitting
#model_m.add(GlobalAveragePooling1D())
glob1 = keras.layers.GlobalAveragePooling1D()(conv4)
#
drop1 = keras.layers.Dropout(0.5)(glob1)
outputs = keras.layers.Dense(num_classes, activation='softmax')(drop1)
#
# Now define the model
model_m = keras.models.Model(inputs=inputs1, outputs=outputs)
print(model_m.summary())
Model: "model"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         [(None, 80, 3)]           0
_________________________________________________________________
conv1d_4 (Conv1D)            (None, 71, 50)            1550
_________________________________________________________________
conv1d_5 (Conv1D)            (None, 62, 50)            25050
_________________________________________________________________
max_pooling1d_1 (MaxPooling1 (None, 20, 50)            0
_________________________________________________________________
conv1d_6 (Conv1D)            (None, 11, 80)            40080
_________________________________________________________________
conv1d_7 (Conv1D)            (None, 2, 80)             64080
_________________________________________________________________
global_average_pooling1d_1 ( (None, 80)                0
_________________________________________________________________
dropout_1 (Dropout)          (None, 80)                0
_________________________________________________________________
dense_5 (Dense)              (None, 6)                 486
=================================================================
Total params: 131,246
Trainable params: 131,246
Non-trainable params: 0
_________________________________________________________________
None
patience = 4
callbacks_list = [
    keras.callbacks.ModelCheckpoint(
        filepath='best_model.har_cnn.h5',
        monitor='val_loss', save_best_only=True),
    keras.callbacks.EarlyStopping(monitor='val_loss', patience=patience)
]
model_m.compile(loss='categorical_crossentropy',
                optimizer='adam', metrics=['accuracy'])
BATCH_SIZE = 400
EPOCHS = 50
history = model_m.fit(x_train,
                      y_train_hot,
                      batch_size=BATCH_SIZE,
                      epochs=5,
                      callbacks=[],
                      validation_data=(x_test, y_test_hot),
                      verbose=1)
history = model_m.fit(x_train,
                      y_train_hot,
                      batch_size=BATCH_SIZE,
                      epochs=EPOCHS,
                      callbacks=callbacks_list,
                      validation_data=(x_test, y_test_hot),
                      verbose=1)
Epoch 1/5
53/53 [==============================] - 8s 139ms/step - loss: 0.9392 - accuracy: 0.6702 - val_loss: 0.7173 - val_accuracy: 0.7594
Epoch 2/5
53/53 [==============================] - 7s 135ms/step - loss: 0.5404 - accuracy: 0.8072 - val_loss: 0.5551 - val_accuracy: 0.7989
Epoch 3/5
53/53 [==============================] - 7s 125ms/step - loss: 0.4224 - accuracy: 0.8489 - val_loss: 0.5630 - val_accuracy: 0.7953
Epoch 4/5
53/53 [==============================] - 7s 123ms/step - loss: 0.3854 - accuracy: 0.8619 - val_loss: 0.5851 - val_accuracy: 0.7889
Epoch 5/5
53/53 [==============================] - 7s 131ms/step - loss: 0.3280 - accuracy: 0.8805 - val_loss: 0.5646 - val_accuracy: 0.8065
Epoch 1/50
53/53 [==============================] - 7s 131ms/step - loss: 0.2988 - accuracy: 0.8902 - val_loss: 0.5658 - val_accuracy: 0.8126
Epoch 2/50
53/53 [==============================] - 7s 131ms/step - loss: 0.2794 - accuracy: 0.8960 - val_loss: 0.5642 - val_accuracy: 0.8177
Epoch 3/50
53/53 [==============================] - 7s 131ms/step - loss: 0.2694 - accuracy: 0.9015 - val_loss: 0.5514 - val_accuracy: 0.8153
Epoch 4/50
53/53 [==============================] - 7s 139ms/step - loss: 0.2576 - accuracy: 0.9055 - val_loss: 0.6014 - val_accuracy: 0.8165
Epoch 5/50
53/53 [==============================] - 7s 133ms/step - loss: 0.2272 - accuracy: 0.9167 - val_loss: 0.5091 - val_accuracy: 0.8566
Epoch 6/50
53/53 [==============================] - 7s 133ms/step - loss: 0.2101 - accuracy: 0.9286 - val_loss: 0.5854 - val_accuracy: 0.8527
Epoch 7/50
53/53 [==============================] - 7s 131ms/step - loss: 0.1985 - accuracy: 0.9335 - val_loss: 0.5645 - val_accuracy: 0.8505
Epoch 8/50
53/53 [==============================] - 7s 129ms/step - loss: 0.1795 - accuracy: 0.9404 - val_loss: 0.6533 - val_accuracy: 0.8338
Epoch 9/50
53/53 [==============================] - 7s 133ms/step - loss: 0.1757 - accuracy: 0.9407 - val_loss: 0.6618 - val_accuracy: 0.8408
best_val_acc = history.history['val_accuracy'][-(patience+1)]
print("Best validation accuracy is:",best_val_acc)
Best validation accuracy is: 0.8566220998764038
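A note on the indexing above: because `EarlyStopping` monitors `val_loss` with a patience of 4, training halts 4 epochs after the last improvement, so the entry `patience+1` positions from the end of the history is the epoch that had the best `val_loss`. (This only holds when early stopping actually fired before `EPOCHS` ran out.) A toy illustration with made-up numbers:

```python
# Sketch of why history.history['val_accuracy'][-(patience+1)] picks the
# epoch that triggered early stopping. All values below are hypothetical.
patience = 4

# Suppose val_loss bottoms out at epoch index 4; EarlyStopping (monitoring
# val_loss) then waits `patience` more epochs without improvement and halts.
val_loss = [0.70, 0.60, 0.55, 0.52, 0.50, 0.56, 0.53, 0.58, 0.55]
val_acc  = [0.75, 0.78, 0.80, 0.81, 0.86, 0.84, 0.85, 0.83, 0.84]

best_epoch = val_loss.index(min(val_loss))   # epoch with the best val_loss
best_val_acc = val_acc[-(patience + 1)]      # same epoch, counted from the end
assert best_val_acc == val_acc[best_epoch] == 0.86
```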
To make this network we remove the two convolutional layers at the end of the previous model, just to cut down on the total number of trainable parameters (which helps reduce overfitting). The structure of the two "heads" is exactly the same except for the kernel size. It is not necessary for the two heads to be so similar; this is just done for convenience.
We then merge the two heads with a concatenate layer and send the result through a dense layer and a softmax to get our final output. There are a lot of choices that can be made here: kernel sizes, number of kernels, number of convolutional layers, number of heads, amount of dropout, etc.
#
#
inputs1 = keras.layers.Input(shape=(80,3))
h1conv1 = keras.layers.Conv1D(filters=100, kernel_size=10, activation='relu')(inputs1)
h1conv2 = keras.layers.Conv1D(filters=50, kernel_size=10, activation='relu')(h1conv1)
h1pool1 = keras.layers.MaxPooling1D(3)(h1conv2)
h1glob1 = keras.layers.GlobalAveragePooling1D()(h1pool1)
h1drop1 = keras.layers.Dropout(0.5)(h1glob1)
#
inputs2 = keras.layers.Input(shape=(80,3))
h2conv1 = keras.layers.Conv1D(filters=100, kernel_size=20, activation='relu')(inputs2)
h2conv2 = keras.layers.Conv1D(filters=50, kernel_size=20, activation='relu')(h2conv1)
h2pool1 = keras.layers.MaxPooling1D(3)(h2conv2)
h2glob1 = keras.layers.GlobalAveragePooling1D()(h2pool1)
h2drop1 = keras.layers.Dropout(0.5)(h2glob1)
#
# Concatenate the output of the above two branches
merged = keras.layers.concatenate([h1drop1,h2drop1])
dense1 = keras.layers.Dense(100, activation='relu')(merged)
outputs = keras.layers.Dense(6, activation='softmax')(dense1)
#
# Now define the model
model_m = keras.models.Model(inputs=[inputs1, inputs2], outputs=outputs)
print(model_m.summary())
Model: "model_1"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_2 (InputLayer) [(None, 80, 3)] 0
__________________________________________________________________________________________________
input_3 (InputLayer) [(None, 80, 3)] 0
__________________________________________________________________________________________________
conv1d_8 (Conv1D) (None, 71, 100) 3100 input_2[0][0]
__________________________________________________________________________________________________
conv1d_10 (Conv1D) (None, 61, 100) 6100 input_3[0][0]
__________________________________________________________________________________________________
conv1d_9 (Conv1D) (None, 62, 50) 50050 conv1d_8[0][0]
__________________________________________________________________________________________________
conv1d_11 (Conv1D) (None, 42, 50) 100050 conv1d_10[0][0]
__________________________________________________________________________________________________
max_pooling1d_2 (MaxPooling1D) (None, 20, 50) 0 conv1d_9[0][0]
__________________________________________________________________________________________________
max_pooling1d_3 (MaxPooling1D) (None, 14, 50) 0 conv1d_11[0][0]
__________________________________________________________________________________________________
global_average_pooling1d_2 (Glo (None, 50) 0 max_pooling1d_2[0][0]
__________________________________________________________________________________________________
global_average_pooling1d_3 (Glo (None, 50) 0 max_pooling1d_3[0][0]
__________________________________________________________________________________________________
dropout_2 (Dropout) (None, 50) 0 global_average_pooling1d_2[0][0]
__________________________________________________________________________________________________
dropout_3 (Dropout) (None, 50) 0 global_average_pooling1d_3[0][0]
__________________________________________________________________________________________________
concatenate (Concatenate) (None, 100) 0 dropout_2[0][0]
dropout_3[0][0]
__________________________________________________________________________________________________
dense_6 (Dense) (None, 100) 10100 concatenate[0][0]
__________________________________________________________________________________________________
dense_7 (Dense) (None, 6) 606 dense_6[0][0]
==================================================================================================
Total params: 170,006
Trainable params: 170,006
Non-trainable params: 0
__________________________________________________________________________________________________
None
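The shapes and parameter counts in this summary can be checked by hand: with 'valid' padding and stride 1, a Conv1D shortens the sequence by kernel_size - 1, and each layer has (kernel_size × in_channels + 1) × filters parameters. A quick verification in plain Python:

```python
# Sanity check: reproduce the output lengths and parameter counts in the summary.
def conv1d_out_len(n, kernel_size):
    return n - kernel_size + 1                         # 'valid' padding, stride 1

def conv1d_params(in_channels, filters, kernel_size):
    return (kernel_size * in_channels + 1) * filters   # +1 per filter for the bias

# Head 1 (kernel_size=10): 80 -> 71 -> 62, then MaxPooling1D(3) -> 20
assert conv1d_out_len(80, 10) == 71 and conv1d_out_len(71, 10) == 62 and 62 // 3 == 20
assert conv1d_params(3, 100, 10) == 3100 and conv1d_params(100, 50, 10) == 50050

# Head 2 (kernel_size=20): 80 -> 61 -> 42, then MaxPooling1D(3) -> 14
assert conv1d_out_len(80, 20) == 61 and conv1d_out_len(61, 20) == 42 and 42 // 3 == 14
assert conv1d_params(3, 100, 20) == 6100 and conv1d_params(100, 50, 20) == 100050

# Dense layers after the concatenate: (100+1)*100 = 10100 and (100+1)*6 = 606
total = 3100 + 50050 + 6100 + 100050 + 10100 + 606
assert total == 170006                                 # matches "Total params: 170,006"
```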
patience = 4
callbacks_list = [
keras.callbacks.ModelCheckpoint(
filepath='best_model.har_cnn.h5',
monitor='val_loss', save_best_only=True),
keras.callbacks.EarlyStopping(monitor='val_loss', patience=patience)
]
model_m.compile(loss='categorical_crossentropy',
optimizer='adam', metrics=['accuracy'])
BATCH_SIZE = 400
EPOCHS = 50
history = model_m.fit([x_train,x_train],
y_train_hot,
batch_size=BATCH_SIZE,
epochs=5,
callbacks=[],
validation_data=([x_test,x_test], y_test_hot),
verbose=1)
history = model_m.fit([x_train,x_train],
y_train_hot,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
callbacks=callbacks_list,
validation_data=([x_test,x_test], y_test_hot),
verbose=1)
print("Validation accuracies by epoch ",history.history["val_accuracy"])
Epoch 1/5  53/53 [==============================] - 24s 448ms/step - loss: 1.0581 - accuracy: 0.6336 - val_loss: 0.8465 - val_accuracy: 0.6476
Epoch 2/5  53/53 [==============================] - 23s 442ms/step - loss: 0.7080 - accuracy: 0.7172 - val_loss: 0.7005 - val_accuracy: 0.7342
Epoch 3/5  53/53 [==============================] - 22s 419ms/step - loss: 0.5975 - accuracy: 0.7729 - val_loss: 0.6205 - val_accuracy: 0.7980
Epoch 4/5  53/53 [==============================] - 23s 433ms/step - loss: 0.5429 - accuracy: 0.8009 - val_loss: 0.6093 - val_accuracy: 0.7840
Epoch 5/5  53/53 [==============================] - 24s 448ms/step - loss: 0.4992 - accuracy: 0.8171 - val_loss: 0.5724 - val_accuracy: 0.8156
Epoch 1/50  53/53 [==============================] - 22s 423ms/step - loss: 0.4803 - accuracy: 0.8249 - val_loss: 0.5413 - val_accuracy: 0.8132
Epoch 2/50  53/53 [==============================] - 22s 421ms/step - loss: 0.4457 - accuracy: 0.8341 - val_loss: 0.5437 - val_accuracy: 0.7971
Epoch 3/50  53/53 [==============================] - 23s 437ms/step - loss: 0.4217 - accuracy: 0.8430 - val_loss: 0.5033 - val_accuracy: 0.8187
Epoch 4/50  53/53 [==============================] - 24s 450ms/step - loss: 0.3978 - accuracy: 0.8547 - val_loss: 0.4830 - val_accuracy: 0.8187
Epoch 5/50  53/53 [==============================] - 23s 442ms/step - loss: 0.3857 - accuracy: 0.8590 - val_loss: 0.4644 - val_accuracy: 0.8226
Epoch 6/50  53/53 [==============================] - 23s 429ms/step - loss: 0.3640 - accuracy: 0.8636 - val_loss: 0.4445 - val_accuracy: 0.8363
Epoch 7/50  53/53 [==============================] - 23s 436ms/step - loss: 0.3421 - accuracy: 0.8735 - val_loss: 0.4729 - val_accuracy: 0.8253
Epoch 8/50  53/53 [==============================] - 23s 440ms/step - loss: 0.3279 - accuracy: 0.8805 - val_loss: 0.4434 - val_accuracy: 0.8320
Epoch 9/50  53/53 [==============================] - 23s 442ms/step - loss: 0.3115 - accuracy: 0.8824 - val_loss: 0.4275 - val_accuracy: 0.8472
Epoch 10/50  53/53 [==============================] - 24s 454ms/step - loss: 0.2934 - accuracy: 0.8913 - val_loss: 0.3947 - val_accuracy: 0.8606
Epoch 11/50  53/53 [==============================] - 24s 446ms/step - loss: 0.2778 - accuracy: 0.9012 - val_loss: 0.3926 - val_accuracy: 0.8682
Epoch 12/50  53/53 [==============================] - 24s 446ms/step - loss: 0.2705 - accuracy: 0.9009 - val_loss: 0.3805 - val_accuracy: 0.8742
Epoch 13/50  53/53 [==============================] - 24s 448ms/step - loss: 0.2539 - accuracy: 0.9106 - val_loss: 0.4400 - val_accuracy: 0.8581
Epoch 14/50  53/53 [==============================] - 23s 440ms/step - loss: 0.2407 - accuracy: 0.9158 - val_loss: 0.4266 - val_accuracy: 0.8600
Epoch 15/50  53/53 [==============================] - 24s 446ms/step - loss: 0.2335 - accuracy: 0.9188 - val_loss: 0.3878 - val_accuracy: 0.8821
Epoch 16/50  53/53 [==============================] - 23s 440ms/step - loss: 0.2195 - accuracy: 0.9240 - val_loss: 0.3733 - val_accuracy: 0.8913
Epoch 17/50  53/53 [==============================] - 23s 437ms/step - loss: 0.2121 - accuracy: 0.9257 - val_loss: 0.3652 - val_accuracy: 0.8922
Epoch 18/50  53/53 [==============================] - 23s 433ms/step - loss: 0.2187 - accuracy: 0.9241 - val_loss: 0.3890 - val_accuracy: 0.8691
Epoch 19/50  53/53 [==============================] - 23s 435ms/step - loss: 0.2058 - accuracy: 0.9277 - val_loss: 0.3836 - val_accuracy: 0.8891
Epoch 20/50  53/53 [==============================] - 23s 433ms/step - loss: 0.2003 - accuracy: 0.9309 - val_loss: 0.3548 - val_accuracy: 0.8931
Epoch 21/50  53/53 [==============================] - 22s 417ms/step - loss: 0.1923 - accuracy: 0.9343 - val_loss: 0.3649 - val_accuracy: 0.9052
Epoch 22/50  53/53 [==============================] - 24s 444ms/step - loss: 0.1843 - accuracy: 0.9369 - val_loss: 0.3440 - val_accuracy: 0.9034
Epoch 23/50  53/53 [==============================] - 23s 442ms/step - loss: 0.1836 - accuracy: 0.9374 - val_loss: 0.3952 - val_accuracy: 0.8806
Epoch 24/50  53/53 [==============================] - 23s 427ms/step - loss: 0.1789 - accuracy: 0.9374 - val_loss: 0.3509 - val_accuracy: 0.9016
Epoch 25/50  53/53 [==============================] - 23s 439ms/step - loss: 0.1772 - accuracy: 0.9388 - val_loss: 0.3470 - val_accuracy: 0.9019
Epoch 26/50  53/53 [==============================] - 23s 427ms/step - loss: 0.1729 - accuracy: 0.9411 - val_loss: 0.3149 - val_accuracy: 0.9046
Epoch 27/50  53/53 [==============================] - 23s 429ms/step - loss: 0.1740 - accuracy: 0.9420 - val_loss: 0.3397 - val_accuracy: 0.9074
Epoch 28/50  53/53 [==============================] - 23s 440ms/step - loss: 0.1652 - accuracy: 0.9443 - val_loss: 0.3623 - val_accuracy: 0.8937
Epoch 29/50  53/53 [==============================] - 23s 433ms/step - loss: 0.1614 - accuracy: 0.9446 - val_loss: 0.3340 - val_accuracy: 0.9116
Epoch 30/50  53/53 [==============================] - 24s 444ms/step - loss: 0.1563 - accuracy: 0.9464 - val_loss: 0.3619 - val_accuracy: 0.9034
Validation accuracies by epoch  [0.8131834864616394, 0.797083854675293, 0.8186512589454651, 0.8186512589454651, 0.822600245475769, 0.836269736289978, 0.8253341317176819, 0.8320170044898987, 0.8472053408622742, 0.8605710864067078, 0.8681652545928955, 0.8742405772209167, 0.8581409454345703, 0.8599635362625122, 0.8821384906768799, 0.8912515044212341, 0.8921627998352051, 0.8690765500068665, 0.8891251683235168, 0.893074095249176, 0.9052248001098633, 0.9034022092819214, 0.8806197047233582, 0.9015795588493347, 0.9018833637237549, 0.9046172499656677, 0.9073511362075806, 0.8936816453933716, 0.9116038680076599, 0.9034022092819214]
print("Validation accuracies by epoch ",history.history["val_accuracy"])
Validation accuracies by epoch [0.8131834864616394, 0.797083854675293, 0.8186512589454651, 0.8186512589454651, 0.822600245475769, 0.836269736289978, 0.8253341317176819, 0.8320170044898987, 0.8472053408622742, 0.8605710864067078, 0.8681652545928955, 0.8742405772209167, 0.8581409454345703, 0.8599635362625122, 0.8821384906768799, 0.8912515044212341, 0.8921627998352051, 0.8690765500068665, 0.8891251683235168, 0.893074095249176, 0.9052248001098633, 0.9034022092819214, 0.8806197047233582, 0.9015795588493347, 0.9018833637237549, 0.9046172499656677, 0.9073511362075806, 0.8936816453933716, 0.9116038680076599, 0.9034022092819214]
best_val_acc = history.history['val_accuracy'][-(patience+1)]
print("Best validation accuracy is:",best_val_acc)
Best validation accuracy is: 0.9046172499656677
Design a 3-head network. Try 3 different kernel sizes for the 3rd head: 5, 15, and 25 (using the same kernel size for each of the two Conv1D layers in that head, just as we do for the first 2 heads above). Which works best?
It would be a good idea (but is not required) to do this in a loop, varying the kernel size of the 3rd head.
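Before running the exercise, it helps to see how each candidate kernel size changes the model size. The sketch below is pure bookkeeping (no Keras needed), using the fact that a Conv1D layer with 'valid' padding has (kernel_size × in_channels + 1) × filters parameters; the totals for kernel sizes 5 and 15 match the model summaries that follow, and the 25 case is the same arithmetic.

```python
# Bookkeeping sketch: total trainable parameters vs. kernel size of the 3rd head.
def conv1d_params(in_channels, filters, kernel_size):
    # 'valid' padding; +1 per filter for the bias
    return (kernel_size * in_channels + 1) * filters

two_head_total = 170006        # total from the 2-head model above
totals = {}
for k in (5, 15, 25):
    head3 = conv1d_params(3, 100, k) + conv1d_params(100, 50, k)
    extra_dense = 50 * 100     # concatenate grows 100 -> 150 features into the Dense(100)
    totals[k] = two_head_total + head3 + extra_dense

print(totals)                  # {5: 201656, 15: 254656, 25: 307656}
```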
#
#
inputs1 = keras.layers.Input(shape=(80,3))
h1conv1 = keras.layers.Conv1D(filters=100, kernel_size=10, activation='relu')(inputs1)
h1conv2 = keras.layers.Conv1D(filters=50, kernel_size=10, activation='relu')(h1conv1)
h1pool1 = keras.layers.MaxPooling1D(3)(h1conv2)
h1glob1 = keras.layers.GlobalAveragePooling1D()(h1pool1)
h1drop1 = keras.layers.Dropout(0.5)(h1glob1)
#
inputs2 = keras.layers.Input(shape=(80,3))
h2conv1 = keras.layers.Conv1D(filters=100, kernel_size=20, activation='relu')(inputs2)
h2conv2 = keras.layers.Conv1D(filters=50, kernel_size=20, activation='relu')(h2conv1)
h2pool1 = keras.layers.MaxPooling1D(3)(h2conv2)
h2glob1 = keras.layers.GlobalAveragePooling1D()(h2pool1)
h2drop1 = keras.layers.Dropout(0.5)(h2glob1)
#
inputs3 = keras.layers.Input(shape=(80,3))
h3conv1 = keras.layers.Conv1D(filters=100, kernel_size=5, activation='relu')(inputs3)
h3conv2 = keras.layers.Conv1D(filters=50, kernel_size=5, activation='relu')(h3conv1)
h3pool1 = keras.layers.MaxPooling1D(3)(h3conv2)
h3glob1 = keras.layers.GlobalAveragePooling1D()(h3pool1)
h3drop1 = keras.layers.Dropout(0.5)(h3glob1)
#
# Concatenate the output of the above three branches
merged = keras.layers.concatenate([h1drop1,h2drop1,h3drop1])
dense1 = keras.layers.Dense(100, activation='relu')(merged)
outputs = keras.layers.Dense(6, activation='softmax')(dense1)
#
# Now define the model
model_m = keras.models.Model(inputs=[inputs1, inputs2, inputs3], outputs=outputs)
print(model_m.summary())
Model: "model_7"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_19 (InputLayer) [(None, 80, 3)] 0
__________________________________________________________________________________________________
input_20 (InputLayer) [(None, 80, 3)] 0
__________________________________________________________________________________________________
input_21 (InputLayer) [(None, 80, 3)] 0
__________________________________________________________________________________________________
conv1d_42 (Conv1D) (None, 71, 100) 3100 input_19[0][0]
__________________________________________________________________________________________________
conv1d_44 (Conv1D) (None, 61, 100) 6100 input_20[0][0]
__________________________________________________________________________________________________
conv1d_46 (Conv1D) (None, 76, 100) 1600 input_21[0][0]
__________________________________________________________________________________________________
conv1d_43 (Conv1D) (None, 62, 50) 50050 conv1d_42[0][0]
__________________________________________________________________________________________________
conv1d_45 (Conv1D) (None, 42, 50) 100050 conv1d_44[0][0]
__________________________________________________________________________________________________
conv1d_47 (Conv1D) (None, 72, 50) 25050 conv1d_46[0][0]
__________________________________________________________________________________________________
max_pooling1d_19 (MaxPooling1D) (None, 20, 50) 0 conv1d_43[0][0]
__________________________________________________________________________________________________
max_pooling1d_20 (MaxPooling1D) (None, 14, 50) 0 conv1d_45[0][0]
__________________________________________________________________________________________________
max_pooling1d_21 (MaxPooling1D) (None, 24, 50) 0 conv1d_47[0][0]
__________________________________________________________________________________________________
global_average_pooling1d_19 (Gl (None, 50) 0 max_pooling1d_19[0][0]
__________________________________________________________________________________________________
global_average_pooling1d_20 (Gl (None, 50) 0 max_pooling1d_20[0][0]
__________________________________________________________________________________________________
global_average_pooling1d_21 (Gl (None, 50) 0 max_pooling1d_21[0][0]
__________________________________________________________________________________________________
dropout_19 (Dropout) (None, 50) 0 global_average_pooling1d_19[0][0]
__________________________________________________________________________________________________
dropout_20 (Dropout) (None, 50) 0 global_average_pooling1d_20[0][0]
__________________________________________________________________________________________________
dropout_21 (Dropout) (None, 50) 0 global_average_pooling1d_21[0][0]
__________________________________________________________________________________________________
concatenate_6 (Concatenate) (None, 150) 0 dropout_19[0][0]
dropout_20[0][0]
dropout_21[0][0]
__________________________________________________________________________________________________
dense_18 (Dense) (None, 100) 15100 concatenate_6[0][0]
__________________________________________________________________________________________________
dense_19 (Dense) (None, 6) 606 dense_18[0][0]
==================================================================================================
Total params: 201,656
Trainable params: 201,656
Non-trainable params: 0
__________________________________________________________________________________________________
None
patience = 4
callbacks_list = [
keras.callbacks.ModelCheckpoint(
filepath='best_model.har_cnn.h5',
monitor='val_loss', save_best_only=True),
keras.callbacks.EarlyStopping(monitor='val_loss', patience=patience)
]
model_m.compile(loss='categorical_crossentropy',
optimizer='adam', metrics=['accuracy'])
BATCH_SIZE = 400
EPOCHS = 50
history = model_m.fit([x_train,x_train,x_train],
y_train_hot,
batch_size=BATCH_SIZE,
epochs=5,
callbacks=[],
validation_data=([x_test,x_test,x_test], y_test_hot),
verbose=1)
history = model_m.fit([x_train,x_train,x_train],
y_train_hot,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
callbacks=callbacks_list,
validation_data=([x_test,x_test,x_test], y_test_hot),
verbose=1)
print("Validation accuracies by epoch ",history.history["val_accuracy"])
best_val_acc = history.history['val_accuracy'][-(patience+1)]
print("Best validation accuracy is:",best_val_acc)
Epoch 1/5  53/53 [==============================] - 32s 587ms/step - loss: 1.0279 - accuracy: 0.6434 - val_loss: 0.8029 - val_accuracy: 0.6473
Epoch 2/5  53/53 [==============================] - 31s 590ms/step - loss: 0.6492 - accuracy: 0.7618 - val_loss: 0.5951 - val_accuracy: 0.7943
Epoch 3/5  53/53 [==============================] - 31s 584ms/step - loss: 0.5320 - accuracy: 0.8070 - val_loss: 0.5480 - val_accuracy: 0.8086
Epoch 4/5  53/53 [==============================] - 31s 587ms/step - loss: 0.4819 - accuracy: 0.8232 - val_loss: 0.5673 - val_accuracy: 0.8074
Epoch 5/5  53/53 [==============================] - 31s 590ms/step - loss: 0.4512 - accuracy: 0.8354 - val_loss: 0.5207 - val_accuracy: 0.8147
Epoch 1/50  53/53 [==============================] - 31s 577ms/step - loss: 0.4243 - accuracy: 0.8427 - val_loss: 0.4894 - val_accuracy: 0.8229
Epoch 2/50  53/53 [==============================] - 31s 579ms/step - loss: 0.3953 - accuracy: 0.8542 - val_loss: 0.4745 - val_accuracy: 0.8293
Epoch 3/50  53/53 [==============================] - 31s 581ms/step - loss: 0.3709 - accuracy: 0.8624 - val_loss: 0.4628 - val_accuracy: 0.8329
Epoch 4/50  53/53 [==============================] - 31s 585ms/step - loss: 0.3593 - accuracy: 0.8677 - val_loss: 0.4380 - val_accuracy: 0.8417
Epoch 5/50  53/53 [==============================] - 31s 590ms/step - loss: 0.3334 - accuracy: 0.8782 - val_loss: 0.4294 - val_accuracy: 0.8423
Epoch 6/50  53/53 [==============================] - 29s 544ms/step - loss: 0.3119 - accuracy: 0.8858 - val_loss: 0.4491 - val_accuracy: 0.8448
Epoch 7/50  53/53 [==============================] - 30s 571ms/step - loss: 0.2985 - accuracy: 0.8923 - val_loss: 0.3690 - val_accuracy: 0.8712
Epoch 8/50  53/53 [==============================] - 30s 577ms/step - loss: 0.2761 - accuracy: 0.9035 - val_loss: 0.3640 - val_accuracy: 0.8843
Epoch 9/50  53/53 [==============================] - 30s 569ms/step - loss: 0.2632 - accuracy: 0.9063 - val_loss: 0.4022 - val_accuracy: 0.8645
Epoch 10/50  53/53 [==============================] - 30s 565ms/step - loss: 0.2496 - accuracy: 0.9132 - val_loss: 0.3484 - val_accuracy: 0.9022
Epoch 11/50  53/53 [==============================] - 31s 585ms/step - loss: 0.2420 - accuracy: 0.9152 - val_loss: 0.3532 - val_accuracy: 0.8894
Epoch 12/50  53/53 [==============================] - 31s 590ms/step - loss: 0.2266 - accuracy: 0.9205 - val_loss: 0.3354 - val_accuracy: 0.8973
Epoch 13/50  53/53 [==============================] - 30s 571ms/step - loss: 0.2149 - accuracy: 0.9271 - val_loss: 0.3319 - val_accuracy: 0.8906
Epoch 14/50  53/53 [==============================] - 30s 577ms/step - loss: 0.2038 - accuracy: 0.9284 - val_loss: 0.3382 - val_accuracy: 0.8998
Epoch 15/50  53/53 [==============================] - 31s 581ms/step - loss: 0.2011 - accuracy: 0.9316 - val_loss: 0.3427 - val_accuracy: 0.8952
Epoch 16/50  53/53 [==============================] - 30s 568ms/step - loss: 0.1916 - accuracy: 0.9343 - val_loss: 0.3486 - val_accuracy: 0.8958
Epoch 17/50  53/53 [==============================] - 31s 588ms/step - loss: 0.1889 - accuracy: 0.9358 - val_loss: 0.3171 - val_accuracy: 0.8925
Epoch 18/50  53/53 [==============================] - 30s 558ms/step - loss: 0.1778 - accuracy: 0.9392 - val_loss: 0.3032 - val_accuracy: 0.9070
Epoch 19/50  53/53 [==============================] - 30s 562ms/step - loss: 0.1790 - accuracy: 0.9417 - val_loss: 0.3096 - val_accuracy: 0.8976
Epoch 20/50  53/53 [==============================] - 31s 592ms/step - loss: 0.1752 - accuracy: 0.9401 - val_loss: 0.3019 - val_accuracy: 0.9040
Epoch 21/50  53/53 [==============================] - 30s 573ms/step - loss: 0.1661 - accuracy: 0.9443 - val_loss: 0.2937 - val_accuracy: 0.9037
Epoch 22/50  53/53 [==============================] - 31s 583ms/step - loss: 0.1569 - accuracy: 0.9484 - val_loss: 0.3185 - val_accuracy: 0.8952
Epoch 23/50  53/53 [==============================] - 30s 562ms/step - loss: 0.1597 - accuracy: 0.9452 - val_loss: 0.2866 - val_accuracy: 0.9101
Epoch 24/50  53/53 [==============================] - 31s 585ms/step - loss: 0.1466 - accuracy: 0.9513 - val_loss: 0.3173 - val_accuracy: 0.9061
Epoch 25/50  53/53 [==============================] - 31s 590ms/step - loss: 0.1497 - accuracy: 0.9508 - val_loss: 0.3230 - val_accuracy: 0.9128
Epoch 26/50  53/53 [==============================] - 30s 562ms/step - loss: 0.1498 - accuracy: 0.9491 - val_loss: 0.2935 - val_accuracy: 0.9125
Epoch 27/50  53/53 [==============================] - 31s 579ms/step - loss: 0.1417 - accuracy: 0.9528 - val_loss: 0.3201 - val_accuracy: 0.9040
Validation accuracies by epoch  [0.8229039907455444, 0.8292831182479858, 0.8329282999038696, 0.8417375683784485, 0.8423450589179993, 0.8447751998901367, 0.8712029457092285, 0.8842648863792419, 0.8645200729370117, 0.9021871089935303, 0.8894289135932922, 0.8973268270492554, 0.8906440138816833, 0.8997569680213928, 0.8952004909515381, 0.8958080410957336, 0.8924666047096252, 0.9070473909378052, 0.8976306319236755, 0.9040096998214722, 0.9037059545516968, 0.8952004909515381, 0.9100850820541382, 0.9061360955238342, 0.912818968296051, 0.9125151634216309, 0.9040096998214722]
Best validation accuracy is: 0.9100850820541382
#
#
inputs1 = keras.layers.Input(shape=(80,3))
h1conv1 = keras.layers.Conv1D(filters=100, kernel_size=10, activation='relu')(inputs1)
h1conv2 = keras.layers.Conv1D(filters=50, kernel_size=10, activation='relu')(h1conv1)
h1pool1 = keras.layers.MaxPooling1D(3)(h1conv2)
h1glob1 = keras.layers.GlobalAveragePooling1D()(h1pool1)
h1drop1 = keras.layers.Dropout(0.5)(h1glob1)
#
inputs2 = keras.layers.Input(shape=(80,3))
h2conv1 = keras.layers.Conv1D(filters=100, kernel_size=20, activation='relu')(inputs2)
h2conv2 = keras.layers.Conv1D(filters=50, kernel_size=20, activation='relu')(h2conv1)
h2pool1 = keras.layers.MaxPooling1D(3)(h2conv2)
h2glob1 = keras.layers.GlobalAveragePooling1D()(h2pool1)
h2drop1 = keras.layers.Dropout(0.5)(h2glob1)
#
inputs3 = keras.layers.Input(shape=(80,3))
h3conv1 = keras.layers.Conv1D(filters=100, kernel_size=15, activation='relu')(inputs3)
h3conv2 = keras.layers.Conv1D(filters=50, kernel_size=15, activation='relu')(h3conv1)
h3pool1 = keras.layers.MaxPooling1D(3)(h3conv2)
h3glob1 = keras.layers.GlobalAveragePooling1D()(h3pool1)
h3drop1 = keras.layers.Dropout(0.5)(h3glob1)
#
# Concatenate the output of the above three branches
merged = keras.layers.concatenate([h1drop1,h2drop1,h3drop1])
dense1 = keras.layers.Dense(100, activation='relu')(merged)
outputs = keras.layers.Dense(6, activation='softmax')(dense1)
#
# Now define the model
model_m = keras.models.Model(inputs=[inputs1, inputs2, inputs3], outputs=outputs)
print(model_m.summary())
Model: "model_8"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_22 (InputLayer) [(None, 80, 3)] 0
__________________________________________________________________________________________________
input_23 (InputLayer) [(None, 80, 3)] 0
__________________________________________________________________________________________________
input_24 (InputLayer) [(None, 80, 3)] 0
__________________________________________________________________________________________________
conv1d_48 (Conv1D) (None, 71, 100) 3100 input_22[0][0]
__________________________________________________________________________________________________
conv1d_50 (Conv1D) (None, 61, 100) 6100 input_23[0][0]
__________________________________________________________________________________________________
conv1d_52 (Conv1D) (None, 66, 100) 4600 input_24[0][0]
__________________________________________________________________________________________________
conv1d_49 (Conv1D) (None, 62, 50) 50050 conv1d_48[0][0]
__________________________________________________________________________________________________
conv1d_51 (Conv1D) (None, 42, 50) 100050 conv1d_50[0][0]
__________________________________________________________________________________________________
conv1d_53 (Conv1D) (None, 52, 50) 75050 conv1d_52[0][0]
__________________________________________________________________________________________________
max_pooling1d_22 (MaxPooling1D) (None, 20, 50) 0 conv1d_49[0][0]
__________________________________________________________________________________________________
max_pooling1d_23 (MaxPooling1D) (None, 14, 50) 0 conv1d_51[0][0]
__________________________________________________________________________________________________
max_pooling1d_24 (MaxPooling1D) (None, 17, 50) 0 conv1d_53[0][0]
__________________________________________________________________________________________________
global_average_pooling1d_22 (Gl (None, 50) 0 max_pooling1d_22[0][0]
__________________________________________________________________________________________________
global_average_pooling1d_23 (Gl (None, 50) 0 max_pooling1d_23[0][0]
__________________________________________________________________________________________________
global_average_pooling1d_24 (Gl (None, 50) 0 max_pooling1d_24[0][0]
__________________________________________________________________________________________________
dropout_22 (Dropout) (None, 50) 0 global_average_pooling1d_22[0][0]
__________________________________________________________________________________________________
dropout_23 (Dropout) (None, 50) 0 global_average_pooling1d_23[0][0]
__________________________________________________________________________________________________
dropout_24 (Dropout) (None, 50) 0 global_average_pooling1d_24[0][0]
__________________________________________________________________________________________________
concatenate_7 (Concatenate) (None, 150) 0 dropout_22[0][0]
dropout_23[0][0]
dropout_24[0][0]
__________________________________________________________________________________________________
dense_20 (Dense) (None, 100) 15100 concatenate_7[0][0]
__________________________________________________________________________________________________
dense_21 (Dense) (None, 6) 606 dense_20[0][0]
==================================================================================================
Total params: 254,656
Trainable params: 254,656
Non-trainable params: 0
__________________________________________________________________________________________________
None
patience = 4
callbacks_list = [
keras.callbacks.ModelCheckpoint(
filepath='best_model.har_cnn.h5',
monitor='val_loss', save_best_only=True),
keras.callbacks.EarlyStopping(monitor='val_loss', patience=patience)
]
model_m.compile(loss='categorical_crossentropy',
optimizer='adam', metrics=['accuracy'])
BATCH_SIZE = 400
EPOCHS = 50
history = model_m.fit([x_train,x_train,x_train],
y_train_hot,
batch_size=BATCH_SIZE,
epochs=5,
callbacks=[],
validation_data=([x_test,x_test,x_test], y_test_hot),
verbose=1)
history = model_m.fit([x_train,x_train,x_train],
y_train_hot,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
callbacks=callbacks_list,
validation_data=([x_test,x_test,x_test], y_test_hot),
verbose=1)
print("Validation accuracies by epoch ",history.history["val_accuracy"])
best_val_acc = history.history['val_accuracy'][-(patience+1)]
print("Best validation accuracy is:",best_val_acc)
Epoch 1/5  53/53 [==============================] - 38s 708ms/step - loss: 0.9894 - accuracy: 0.6577 - val_loss: 0.8021 - val_accuracy: 0.6473
Epoch 2/5  53/53 [==============================] - 35s 664ms/step - loss: 0.6535 - accuracy: 0.7465 - val_loss: 0.6616 - val_accuracy: 0.7855
Epoch 3/5  53/53 [==============================] - 36s 687ms/step - loss: 0.5520 - accuracy: 0.7969 - val_loss: 0.6763 - val_accuracy: 0.7476
Epoch 4/5  53/53 [==============================] - 37s 710ms/step - loss: 0.5081 - accuracy: 0.8134 - val_loss: 0.5708 - val_accuracy: 0.7947
Epoch 5/5  53/53 [==============================] - 37s 696ms/step - loss: 0.4665 - accuracy: 0.8318 - val_loss: 0.5803 - val_accuracy: 0.7943
Epoch 1/50  53/53 [==============================] - 37s 696ms/step - loss: 0.4347 - accuracy: 0.8436 - val_loss: 0.5360 - val_accuracy: 0.8065
Epoch 2/50  53/53 [==============================] - 36s 683ms/step - loss: 0.4066 - accuracy: 0.8524 - val_loss: 0.5173 - val_accuracy: 0.8247
Epoch 3/50  53/53 [==============================] - 37s 694ms/step - loss: 0.3824 - accuracy: 0.8595 - val_loss: 0.4815 - val_accuracy: 0.8351
Epoch 4/50  53/53 [==============================] - 37s 702ms/step - loss: 0.3546 - accuracy: 0.8708 - val_loss: 0.5178 - val_accuracy: 0.8144
Epoch 5/50  53/53 [==============================] - 38s 710ms/step - loss: 0.3298 - accuracy: 0.8811 - val_loss: 0.4763 - val_accuracy: 0.8357
Epoch 6/50  53/53 [==============================] - 37s 696ms/step - loss: 0.3166 - accuracy: 0.8853 - val_loss: 0.4432 - val_accuracy: 0.8481
Epoch 7/50  53/53 [==============================] - 38s 712ms/step - loss: 0.3022 - accuracy: 0.8925 - val_loss: 0.4128 - val_accuracy: 0.8569
Epoch 8/50  53/53 [==============================] - 38s 711ms/step - loss: 0.2730 - accuracy: 0.9055 - val_loss: 0.3851 - val_accuracy: 0.8739
Epoch 9/50  53/53 [==============================] - 36s 690ms/step - loss: 0.2565 - accuracy: 0.9101 - val_loss: 0.3733 - val_accuracy: 0.8748
Epoch 10/50  53/53 [==============================] - 38s 717ms/step - loss: 0.2451 - accuracy: 0.9145 - val_loss: 0.3776 - val_accuracy: 0.8858
Epoch 11/50  53/53 [==============================] - 36s 688ms/step - loss: 0.2334 - accuracy: 0.9207 - val_loss: 0.3768 - val_accuracy: 0.8718
Epoch 12/50  53/53 [==============================] - 38s 725ms/step - loss: 0.2226 - accuracy: 0.9230 - val_loss: 0.3655 - val_accuracy: 0.8830
Epoch 13/50  53/53 [==============================] - 37s 700ms/step - loss: 0.2150 - accuracy: 0.9264 - val_loss: 0.3853 - val_accuracy: 0.8776
Epoch 14/50  53/53 [==============================] - 36s 687ms/step - loss: 0.2077 - accuracy: 0.9297 - val_loss: 0.3814 - val_accuracy: 0.8897
Epoch 15/50  53/53 [==============================] - 38s 717ms/step - loss: 0.2003 - accuracy: 0.9310 - val_loss: 0.3497 - val_accuracy: 0.8867
Epoch 16/50  53/53 [==============================] - 37s 700ms/step - loss: 0.1970 - accuracy: 0.9303 - val_loss: 0.3773 - val_accuracy: 0.8852
Epoch 17/50  53/53 [==============================] - 37s 694ms/step - loss: 0.1901 - accuracy: 0.9355 - val_loss: 0.3163 - val_accuracy: 0.9095
Epoch 18/50  53/53 [==============================] - 37s 694ms/step - loss: 0.1854 - accuracy: 0.9366 - val_loss: 0.3229 - val_accuracy: 0.9055
Epoch 19/50  53/53 [==============================] - 37s 694ms/step - loss: 0.1737 - accuracy: 0.9426 - val_loss: 0.3333 - val_accuracy: 0.9061
Epoch 20/50  53/53 [==============================] - 38s 717ms/step - loss: 0.1781 - accuracy: 0.9401 - val_loss: 0.3237 - val_accuracy: 0.9092
Epoch 21/50  53/53 [==============================] - 37s 698ms/step - loss: 0.1674 - accuracy: 0.9422 - val_loss: 0.3546 - val_accuracy: 0.9007
Validation accuracies by epoch  [0.8065006136894226, 0.8247265815734863, 0.8350546956062317, 0.8143985271453857, 0.8356621861457825, 0.8481166362762451, 0.856925904750824, 0.8739368319511414, 0.8748481273651123, 0.8857837319374084, 0.8718104362487793, 0.8830498456954956, 0.8775820136070251, 0.8897326588630676, 0.8866950273513794, 0.8851761817932129, 0.9094775319099426, 0.9055285453796387, 0.9061360955238342, 0.9091737270355225, 0.9006682634353638]
Best validation accuracy is: 0.9094775319099426
#
#
inputs1 = keras.layers.Input(shape=(80,3))
h1conv1 = keras.layers.Conv1D(filters=100, kernel_size=10, activation='relu')(inputs1)
h1conv2 = keras.layers.Conv1D(filters=50, kernel_size=10, activation='relu')(h1conv1)
h1pool1 = keras.layers.MaxPooling1D(3)(h1conv2)
h1glob1 = keras.layers.GlobalAveragePooling1D()(h1pool1)
h1drop1 = keras.layers.Dropout(0.5)(h1glob1)
#
inputs2 = keras.layers.Input(shape=(80,3))
h2conv1 = keras.layers.Conv1D(filters=100, kernel_size=20, activation='relu')(inputs2)
h2conv2 = keras.layers.Conv1D(filters=50, kernel_size=20, activation='relu')(h2conv1)
h2pool1 = keras.layers.MaxPooling1D(3)(h2conv2)
h2glob1 = keras.layers.GlobalAveragePooling1D()(h2pool1)
h2drop1 = keras.layers.Dropout(0.5)(h2glob1)
#
inputs3 = keras.layers.Input(shape=(80,3))
h3conv1 = keras.layers.Conv1D(filters=100, kernel_size=25, activation='relu')(inputs3)
h3conv2 = keras.layers.Conv1D(filters=50, kernel_size=25, activation='relu')(h3conv1)
h3pool1 = keras.layers.MaxPooling1D(3)(h3conv2)
h3glob1 = keras.layers.GlobalAveragePooling1D()(h3pool1)
h3drop1 = keras.layers.Dropout(0.5)(h3glob1)
#
# Concatenate the outputs of the three branches above
merged = keras.layers.concatenate([h1drop1,h2drop1,h3drop1])
dense1 = keras.layers.Dense(100, activation='relu')(merged)
outputs = keras.layers.Dense(6, activation='softmax')(dense1)
#
# Now define the model
model_m = keras.models.Model(inputs=[inputs1, inputs2, inputs3], outputs=outputs)
print(model_m.summary())
Model: "model_9"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_25 (InputLayer) [(None, 80, 3)] 0
__________________________________________________________________________________________________
input_26 (InputLayer) [(None, 80, 3)] 0
__________________________________________________________________________________________________
input_27 (InputLayer) [(None, 80, 3)] 0
__________________________________________________________________________________________________
conv1d_54 (Conv1D) (None, 71, 100) 3100 input_25[0][0]
__________________________________________________________________________________________________
conv1d_56 (Conv1D) (None, 61, 100) 6100 input_26[0][0]
__________________________________________________________________________________________________
conv1d_58 (Conv1D) (None, 56, 100) 7600 input_27[0][0]
__________________________________________________________________________________________________
conv1d_55 (Conv1D) (None, 62, 50) 50050 conv1d_54[0][0]
__________________________________________________________________________________________________
conv1d_57 (Conv1D) (None, 42, 50) 100050 conv1d_56[0][0]
__________________________________________________________________________________________________
conv1d_59 (Conv1D) (None, 32, 50) 125050 conv1d_58[0][0]
__________________________________________________________________________________________________
max_pooling1d_25 (MaxPooling1D) (None, 20, 50) 0 conv1d_55[0][0]
__________________________________________________________________________________________________
max_pooling1d_26 (MaxPooling1D) (None, 14, 50) 0 conv1d_57[0][0]
__________________________________________________________________________________________________
max_pooling1d_27 (MaxPooling1D) (None, 10, 50) 0 conv1d_59[0][0]
__________________________________________________________________________________________________
global_average_pooling1d_25 (Gl (None, 50) 0 max_pooling1d_25[0][0]
__________________________________________________________________________________________________
global_average_pooling1d_26 (Gl (None, 50) 0 max_pooling1d_26[0][0]
__________________________________________________________________________________________________
global_average_pooling1d_27 (Gl (None, 50) 0 max_pooling1d_27[0][0]
__________________________________________________________________________________________________
dropout_25 (Dropout) (None, 50) 0 global_average_pooling1d_25[0][0]
__________________________________________________________________________________________________
dropout_26 (Dropout) (None, 50) 0 global_average_pooling1d_26[0][0]
__________________________________________________________________________________________________
dropout_27 (Dropout) (None, 50) 0 global_average_pooling1d_27[0][0]
__________________________________________________________________________________________________
concatenate_8 (Concatenate) (None, 150) 0 dropout_25[0][0]
dropout_26[0][0]
dropout_27[0][0]
__________________________________________________________________________________________________
dense_22 (Dense) (None, 100) 15100 concatenate_8[0][0]
__________________________________________________________________________________________________
dense_23 (Dense) (None, 6) 606 dense_22[0][0]
==================================================================================================
Total params: 307,656
Trainable params: 307,656
Non-trainable params: 0
__________________________________________________________________________________________________
None
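As a sanity check on the summary above, the Conv1D parameter counts and output lengths can be reproduced by hand. With the Keras default `padding='valid'`, a `Conv1D` maps a length-`L` input to length `L - kernel_size + 1`, and holds `filters * kernel_size * in_channels + filters` weights. The helper names below (`conv1d_params`, `conv1d_out_len`) are just for this illustration, not part of the workbook's code:

```python
# Reproduce the Conv1D parameter counts and output lengths shown in the
# model summary above (kernel sizes 10, 20, 25; input shape (80, 3)).

def conv1d_params(filters, kernel_size, in_channels):
    # One kernel per filter spanning all input channels, plus one bias per filter.
    return filters * kernel_size * in_channels + filters

def conv1d_out_len(in_len, kernel_size):
    # 'valid' padding: no zero-padding, stride 1.
    return in_len - kernel_size + 1

total = 0
for k in (10, 20, 25):
    L = 80
    p1 = conv1d_params(100, k, 3)      # first conv: 3 accelerometer channels in
    L = conv1d_out_len(L, k)
    p2 = conv1d_params(50, k, 100)     # second conv: 100 feature channels in
    L = conv1d_out_len(L, k)
    L_pool = L // 3                    # MaxPooling1D(3)
    total += p1 + p2
    print(f"kernel {k:2d}: params {p1} and {p2}; lengths {L} -> pooled {L_pool}")

# Dense layers: (150 inputs + bias) * 100 units, then (100 + bias) * 6 classes.
total += 150 * 100 + 100
total += 100 * 6 + 6
print("Total params:", total)
```

The per-layer numbers (3100, 6100, 7600, 50050, 100050, 125050) and the total of 307,656 match the summary.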
patience = 4
callbacks_list = [
    keras.callbacks.ModelCheckpoint(
        filepath='best_model.har_cnn.h5',
        monitor='val_loss', save_best_only=True),
    keras.callbacks.EarlyStopping(monitor='val_loss', patience=patience)
]
model_m.compile(loss='categorical_crossentropy',
                optimizer='adam', metrics=['accuracy'])
BATCH_SIZE = 400
EPOCHS = 50
#
# Short initial run with no callbacks, so early stopping cannot
# trigger on the noisy first few epochs.  The same windows are fed
# to all three input branches.
history = model_m.fit([x_train, x_train, x_train],
                      y_train_hot,
                      batch_size=BATCH_SIZE,
                      epochs=5,
                      callbacks=[],
                      validation_data=([x_test, x_test, x_test], y_test_hot),
                      verbose=1)
history = model_m.fit([x_train, x_train, x_train],
                      y_train_hot,
                      batch_size=BATCH_SIZE,
                      epochs=EPOCHS,
                      callbacks=callbacks_list,
                      validation_data=([x_test, x_test, x_test], y_test_hot),
                      verbose=1)
print("Validation accuracies by epoch ", history.history["val_accuracy"])
#
# EarlyStopping halts training `patience` epochs after the best
# monitored val_loss, so that epoch sits patience+1 entries from the end.
best_val_acc = history.history['val_accuracy'][-(patience + 1)]
print("Best validation accuracy is:", best_val_acc)
Epoch 1/5 53/53 [==============================] - 36s 667ms/step - loss: 0.9707 - accuracy: 0.6580 - val_loss: 0.7911 - val_accuracy: 0.6704
Epoch 2/5 53/53 [==============================] - 36s 677ms/step - loss: 0.6426 - accuracy: 0.7690 - val_loss: 0.6034 - val_accuracy: 0.7937
Epoch 3/5 53/53 [==============================] - 33s 628ms/step - loss: 0.5161 - accuracy: 0.8167 - val_loss: 0.5230 - val_accuracy: 0.8065
Epoch 4/5 53/53 [==============================] - 35s 665ms/step - loss: 0.4639 - accuracy: 0.8324 - val_loss: 0.5082 - val_accuracy: 0.8141
Epoch 5/5 53/53 [==============================] - 35s 669ms/step - loss: 0.4265 - accuracy: 0.8422 - val_loss: 0.5200 - val_accuracy: 0.8065
Epoch 1/50 53/53 [==============================] - 35s 660ms/step - loss: 0.4038 - accuracy: 0.8523 - val_loss: 0.5171 - val_accuracy: 0.8062
Epoch 2/50 53/53 [==============================] - 34s 642ms/step - loss: 0.3746 - accuracy: 0.8627 - val_loss: 0.4779 - val_accuracy: 0.8226
Epoch 3/50 53/53 [==============================] - 34s 644ms/step - loss: 0.3442 - accuracy: 0.8716 - val_loss: 0.4847 - val_accuracy: 0.8232
Epoch 4/50 53/53 [==============================] - 35s 656ms/step - loss: 0.3266 - accuracy: 0.8821 - val_loss: 0.4563 - val_accuracy: 0.8299
Epoch 5/50 53/53 [==============================] - 35s 652ms/step - loss: 0.3181 - accuracy: 0.8840 - val_loss: 0.4596 - val_accuracy: 0.8378
Epoch 6/50 53/53 [==============================] - 34s 648ms/step - loss: 0.2976 - accuracy: 0.8942 - val_loss: 0.4314 - val_accuracy: 0.8493
Epoch 7/50 53/53 [==============================] - 34s 650ms/step - loss: 0.2631 - accuracy: 0.9074 - val_loss: 0.4796 - val_accuracy: 0.8490
Epoch 8/50 53/53 [==============================] - 35s 654ms/step - loss: 0.2554 - accuracy: 0.9091 - val_loss: 0.4262 - val_accuracy: 0.8569
Epoch 9/50 53/53 [==============================] - 35s 656ms/step - loss: 0.2397 - accuracy: 0.9174 - val_loss: 0.4258 - val_accuracy: 0.8594
Epoch 10/50 53/53 [==============================] - 35s 656ms/step - loss: 0.2268 - accuracy: 0.9225 - val_loss: 0.3836 - val_accuracy: 0.8733
Epoch 11/50 53/53 [==============================] - 34s 642ms/step - loss: 0.2200 - accuracy: 0.9254 - val_loss: 0.4153 - val_accuracy: 0.8645
Epoch 12/50 53/53 [==============================] - 34s 637ms/step - loss: 0.2071 - accuracy: 0.9288 - val_loss: 0.3880 - val_accuracy: 0.8776
Epoch 13/50 53/53 [==============================] - 35s 664ms/step - loss: 0.1982 - accuracy: 0.9307 - val_loss: 0.3758 - val_accuracy: 0.8928
Epoch 14/50 53/53 [==============================] - 34s 646ms/step - loss: 0.1907 - accuracy: 0.9347 - val_loss: 0.3984 - val_accuracy: 0.8645
Epoch 15/50 53/53 [==============================] - 35s 654ms/step - loss: 0.1838 - accuracy: 0.9387 - val_loss: 0.3972 - val_accuracy: 0.8821
Epoch 16/50 53/53 [==============================] - 34s 646ms/step - loss: 0.1783 - accuracy: 0.9384 - val_loss: 0.3383 - val_accuracy: 0.9137
Epoch 17/50 53/53 [==============================] - 35s 654ms/step - loss: 0.1676 - accuracy: 0.9441 - val_loss: 0.3633 - val_accuracy: 0.9031
Epoch 18/50 53/53 [==============================] - 34s 646ms/step - loss: 0.1647 - accuracy: 0.9451 - val_loss: 0.3811 - val_accuracy: 0.8837
Epoch 19/50 53/53 [==============================] - 35s 660ms/step - loss: 0.1627 - accuracy: 0.9454 - val_loss: 0.3563 - val_accuracy: 0.8949
Epoch 20/50 53/53 [==============================] - 34s 642ms/step - loss: 0.1538 - accuracy: 0.9483 - val_loss: 0.4033 - val_accuracy: 0.8776
Validation accuracies by epoch [0.8061968684196472, 0.822600245475769, 0.8232077956199646, 0.8298906683921814, 0.8377885818481445, 0.8493317365646362, 0.8490279316902161, 0.856925904750824, 0.8593559861183167, 0.8733292818069458, 0.8645200729370117, 0.8775820136070251, 0.8927703499794006, 0.8645200729370117, 0.8821384906768799, 0.913730263710022, 0.9030984044075012, 0.8836573362350464, 0.8948967456817627, 0.8775820136070251]
Best validation accuracy is: 0.913730263710022
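The indexing used for `best_val_acc` deserves a note: `EarlyStopping(patience=4)` halts training `patience` epochs after the epoch with the best monitored `val_loss`, so that epoch's entry sits `patience+1` from the end of the history list (this only holds if early stopping actually fired, as it did here). A quick check against the logged run above, with the accuracies rounded to four places for brevity:

```python
# Validation accuracies from the 50-epoch run above (20 entries;
# early stopping fired after epoch 20, best val_loss at epoch 16).
val_acc = [0.8062, 0.8226, 0.8232, 0.8299, 0.8378, 0.8493, 0.8490,
           0.8569, 0.8594, 0.8733, 0.8645, 0.8776, 0.8928, 0.8645,
           0.8821, 0.9137, 0.9031, 0.8837, 0.8949, 0.8776]
patience = 4

# The best-val_loss epoch sits patience+1 entries from the end; in this
# run it also happens to carry the highest validation accuracy.
best = val_acc[-(patience + 1)]
print(best)                   # 0.9137
print(best == max(val_acc))   # True
```

Note that the two need not coincide in general, since the callback monitors `val_loss` rather than `val_accuracy`.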